
feat(RHOAIENG-50753): add LDAP user and group autocomplete for workspace sharing #805

Open
mprpic wants to merge 4 commits into main from ldap-autocomplete

Conversation


@mprpic mprpic commented Mar 4, 2026

Integrate LDAP to provide autocomplete when sharing workspaces. The backend queries LDAP with substring matching on uid, givenName, and sn for users, and prefix matching on cn for groups. Results are cached in-memory with a 5-minute TTL. The frontend replaces the plain text input with an autocomplete dropdown that debounces queries and supports keyboard navigation. LDAP is optional and gracefully degrades to manual input when unconfigured or unreachable.

@mprpic mprpic force-pushed the ldap-autocomplete branch from a316335 to 18b453a on March 5, 2026 at 02:00
@github-actions

github-actions bot commented Mar 5, 2026

Ambient Code Platform

Kubernetes-native AI automation platform that orchestrates agentic sessions through containerized microservices. Built with Go (backend, operator), NextJS + Shadcn (frontend), Python (runner), and Kubernetes CRDs.

Technical artifacts still use "vteam" for backward compatibility.

Structure

  • components/backend/ - Go REST API (Gin), manages K8s Custom Resources with multi-tenant project isolation
  • components/frontend/ - NextJS web UI for session management and monitoring
  • components/operator/ - Go Kubernetes controller, watches CRDs and creates Jobs
  • components/runners/ambient-runner/ - Python runner executing Claude Code CLI in Job pods
  • components/ambient-cli/ - Go CLI (acpctl), manages agentic sessions from the command line
  • components/public-api/ - Stateless HTTP gateway, proxies to backend (no direct K8s access)
  • components/manifests/ - Kustomize-based deployment manifests and overlays
  • e2e/ - Cypress end-to-end tests
  • docs/ - Astro Starlight documentation site

Key Files

  • CRD definitions: components/manifests/base/crds/agenticsessions-crd.yaml, projectsettings-crd.yaml
  • Session lifecycle: components/backend/handlers/sessions.go, components/operator/internal/handlers/sessions.go
  • Auth & RBAC middleware: components/backend/handlers/middleware.go
  • K8s client init: components/operator/internal/config/config.go
  • Runner entry point: components/runners/ambient-runner/main.py
  • Route registration: components/backend/routes.go
  • Frontend API layer: components/frontend/src/services/api/, src/services/queries/

Session Flow

User Creates Session → Backend Creates CR → Operator Spawns Job →
Pod Runs Claude CLI → Results Stored in CR → UI Displays Progress

Commands

make build-all                # Build all container images
make deploy                   # Deploy to cluster
make test                     # Run tests
make lint                     # Lint code
make kind-up                  # Start local Kind cluster
make test-e2e-local           # Run E2E tests against Kind

Per-Component

# Backend / Operator (Go)
cd components/backend && gofmt -l . && go vet ./... && golangci-lint run
cd components/operator && gofmt -l . && go vet ./... && golangci-lint run

# Frontend
cd components/frontend && npm run build  # Must pass with 0 errors, 0 warnings

# Runner (Python)
cd components/runners/ambient-runner && uv venv && uv pip install -e .

# Docs
cd docs && npm run dev  # http://localhost:4321

Critical Context

  • User token auth required: All user-facing API ops use GetK8sClientsForRequest(c), never the backend service account
  • OwnerReferences on all child resources: Jobs, Secrets, PVCs must have controller owner refs
  • No panic() in production: Return explicit fmt.Errorf with context
  • No any types in frontend: Use proper types, unknown, or generic constraints
  • Conventional commits: Squashed on merge to main

Pre-commit Hooks

The project uses the pre-commit framework to run linters locally before every commit. Configuration lives in .pre-commit-config.yaml.

Install

make setup-hooks

What Runs

On every git commit:

  • trailing-whitespace, end-of-file-fixer, check-yaml, check-added-large-files, check-merge-conflict, detect-private-key: all files
  • ruff-format, ruff (check + fix): Python (runners, scripts)
  • gofmt, go vet, golangci-lint: Go (backend, operator, public-api; per-module)
  • eslint: frontend TypeScript/JavaScript
  • branch-protection: blocks commits to main/master/production

On every git push:

  • push-protection: blocks pushes to main/master/production

Run Manually

make lint                                    # All hooks, all files
pre-commit run gofmt-check --all-files       # Single hook
pre-commit run --files path/to/file.go       # Single file

Skip Hooks

git commit --no-verify    # Skip pre-commit hooks
git push --no-verify      # Skip pre-push hooks

Notes

  • Go and ESLint wrappers (scripts/pre-commit/) skip gracefully if the toolchain is not installed
  • tsc --noEmit and npm run build are not included (slow; CI gates on them)
  • Branch/push protection scripts remain in scripts/git-hooks/ and are invoked by pre-commit

More Info

See BOOKMARKS.md for architecture decisions, development context, code patterns, and component-specific guides.

@github-actions

github-actions bot commented Mar 5, 2026

Claude Code Review

Summary

This PR adds LDAP-backed user and group autocomplete to the workspace sharing flow. The feature is well-scoped, optional-by-default (graceful degradation when unconfigured), and includes solid test coverage. The LDAP client uses proper filter escaping and input sanitization. A few issues need attention before merge: all three new handlers use the wrong variable for the auth nil check (deviating from the mandatory project pattern), and the frontend builds a custom combobox from scratch when a Shadcn equivalent exists.


Issues by Severity

Blocker Issues

None.


Critical Issues

1. Auth nil check uses wrong client variable in all 3 handlers

Files: components/backend/handlers/ldap.go lines 69-70, 99-100, 129-130

All three handlers discard reqK8s with _ and check k8sDyn (the dynamic client) for nil:

_, k8sDyn := GetK8sClientsForRequest(c)
if k8sDyn == nil {

The documented mandatory pattern across the entire codebase (backend-development.md, k8s-client-usage.md, security-standards.md) is to check the typed K8s client (reqK8s):

reqK8s, _ := GetK8sClientsForRequest(c)
if reqK8s == nil {
    c.JSON(http.StatusUnauthorized, gin.H{"error": "Invalid or missing token"})
    c.Abort()
    return
}

While functionally equivalent today (both return nil together), this breaks the established convention, hides the unused reqK8s, and creates a maintenance hazard if GetK8sClientsForRequest behavior ever diverges. All three handlers (SearchLDAPUsers, SearchLDAPGroups, GetLDAPUser) need this fix.

Violates: backend-development.md Critical Rules - Authentication and Authorization; k8s-client-usage.md Pattern 1.


Major Issues

2. Custom combobox dropdown built from scratch instead of Shadcn Command

File: components/frontend/src/components/workspace-sections/_components/ldap-autocomplete.tsx

The component renders a bare <ul> styled with Shadcn design tokens to simulate a dropdown. Shadcn ships a Command component (built on CMDK) designed exactly for async-populated autocomplete patterns, already available in the project's UI kit.

Per frontend-development.md Shadcn UI Components Only rule: FORBIDDEN: Creating custom UI components from scratch for buttons, inputs, dialogs, etc.

The Shadcn Popover + Command + CommandInput + CommandList pattern is the correct implementation path. It also provides accessibility (ARIA roles, keyboard navigation) for free rather than re-implementing manually.

3. New LDAP connection opened per search, no pooling

File: components/backend/ldap/client.go - SearchUsers, SearchGroups, GetUser

Each search dials a fresh TCP+TLS connection and closes it immediately via defer conn.Close(). Every cache miss creates a new LDAP connection. A user typing 5 characters generates up to 4 cold-connection searches (after the first 2-char trigger). At scale this adds meaningful latency and load on the LDAP server. Recommend persisting a connection with reconnect-on-error, or at minimum documenting this as a known limitation.

4. LDAP_GROUP_BASE_DN env var not mapped in deployment manifest

File: components/manifests/base/backend-deployment.yaml

main.go reads os.Getenv("LDAP_GROUP_BASE_DN") but the deployment manifest only maps LDAP_URL, LDAP_BASE_DN, and LDAP_SKIP_TLS_VERIFY from the ldap-config ConfigMap. Operators who need a non-default group base DN cannot configure it without manually patching the manifest.


Minor Issues

5. k8sDyn obtained but never used for K8s operations

File: components/backend/handlers/ldap.go lines 69, 99, 129

The dynamic client is obtained and only checked for nil, never used for actual K8s operations. Once Critical issue 1 is fixed, simplify to reqK8s, _ := GetK8sClientsForRequest(c).

6. Component exceeds 200-line guideline

File: components/frontend/src/components/workspace-sections/_components/ldap-autocomplete.tsx (252 lines)

frontend-development.md sets a 200-line limit. Much of the excess comes from duplicated render blocks for user vs group modes. Extracting a shared ResultItem sub-component would reduce size and eliminate the duplication.

7. Trivial JSDoc comments on self-evident functions

Files: components/frontend/src/services/api/ldap.ts, components/frontend/src/services/queries/use-ldap.ts

Comments like /** Search LDAP users by query (min 2 chars) */ add no information beyond the function name. Per CLAUDE.md: only add comments where logic is not self-evident.


Positive Highlights

  • Injection prevention is solid: sanitizeQuery strips newlines and carriage returns to block log injection, and goldap.EscapeFilter() is applied to every query before it enters an LDAP filter. The %q format verb in log lines ensures special chars are Go-quoted, not literal.
  • TLS hardened correctly: MinVersion: tls.VersionTLS12 is set, and InsecureSkipVerify has a proper nolint:gosec annotation noting it is dev-only and env-var-controlled.
  • Cache is thread-safe: sync.Map is the right primitive for concurrent read-heavy access, and TTL expiry is correctly lazy-evaluated on read.
  • Comprehensive unit tests: 9 test cases cover filter generation (special chars, multi-word, asterisk), DN parsing, query sanitization (newline stripping, truncation), cache TTL expiry, and short-query early-exit. Exactly the coverage this new LDAP package warrants.
  • React Query hooks follow the pattern precisely: enabled: query.length >= 2, staleTime set, hierarchical key factory (ldapKeys.userSearch(query)) correctly includes the query in the cache key with no collision risk.
  • Graceful degradation is well-implemented: optional: true on all ConfigMap refs, LDAPClient == nil guards in every handler, frontend falls back silently to no suggestions.
  • UX improvement in sharing-section.tsx: Clearing subjectName on subject-type switch prevents stale values carrying over between user/group modes.

Recommendations

  1. (Critical) Fix all three handlers: reqK8s, _ := GetK8sClientsForRequest(c) and check reqK8s == nil.
  2. (Major) Replace the custom <ul> dropdown with Shadcn Popover + Command + CommandList.
  3. (Major) Add LDAP_GROUP_BASE_DN to backend-deployment.yaml env mappings and ConfigMap specs.
  4. (Major) Document the per-request connection model as a known limitation, or add connection pooling.
  5. (Minor) Remove trivial JSDoc comments from ldap.ts and use-ldap.ts.
  6. (Minor) Extract a shared ResultItem component to bring ldap-autocomplete.tsx under 200 lines.

@mprpic mprpic force-pushed the ldap-autocomplete branch from 18b453a to 134aa71 on March 5, 2026 at 02:33

@mprpic mprpic force-pushed the ldap-autocomplete branch from 134aa71 to 5c38d48 on March 6, 2026 at 19:30
@mprpic

mprpic commented Mar 6, 2026

Disabling TLS verification in production is intentional for now until we switch to an authenticated LDAP server. The risk of doing this is minimal since the cluster is for authenticated users only and the risk of MITM is low.


@mprpic mprpic force-pushed the ldap-autocomplete branch from 5c38d48 to 2aad455 on March 6, 2026 at 20:45

mprpic and others added 3 commits March 10, 2026 11:13
…ace sharing

Integrate LDAP to provide autocomplete when sharing workspaces. The
backend queries LDAP with substring matching on uid, givenName, and sn
for users, and prefix matching on cn for groups. Results are cached
in-memory with a 5-minute TTL. The frontend replaces the plain text
input with an autocomplete dropdown that debounces queries and supports
keyboard navigation. LDAP is optional and gracefully degrades to manual
input when unconfigured or unreachable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Martin Prpič <mprpic@redhat.com>
Register ldap.autocomplete.enabled in flags.json with the
scope:workspace tag so it appears in the workspace Feature Flags UI.
The backend LDAP endpoints check the flag via FeatureEnabled()
(fail-closed), and the frontend uses useFlag() from the Unleash SDK.
Workspace admins can toggle LDAP autocomplete per-workspace.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Martin Prpič <mprpic@redhat.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Martin Prpič <mprpic@redhat.com>
@mprpic mprpic force-pushed the ldap-autocomplete branch from 2aad455 to ebc132a on March 10, 2026 at 15:13
@coderabbitai

coderabbitai bot commented Mar 10, 2026

Walkthrough

Adds LDAP integration: backend LDAP client with caching and search, HTTP handlers, frontend proxy routes and React Query hooks, an LDAP autocomplete React component integrated into workspace sharing behind a feature flag, and Kubernetes ConfigMaps plus deployment env vars for LDAP configuration.

Changes

  • Backend LDAP Core (components/backend/ldap/client.go, components/backend/ldap/client_test.go): New LDAP client with TTL in-memory cache, user/group search and GetUser, DN parsing, GitHub username extraction, filters, and extensive unit tests for filters, parsing, and cache behavior.
  • Backend API Integration (components/backend/handlers/ldap.go, components/backend/routes.go, components/backend/main.go): Adds LDAP HTTP handlers (SearchLDAPUsers, SearchLDAPGroups, GetLDAPUser), registers routes, and conditionally initializes a shared LDAP client from env vars.
  • Backend Dependencies & Build (components/backend/go.mod, Makefile): Adds three indirect Go module deps for LDAP/ASN.1/NTLM and updates the Makefile to fetch the admin password from a Kubernetes secret for the Unleash port-forward login display.
  • Frontend Proxy Routes (components/frontend/src/app/api/ldap/users/route.ts, components/frontend/src/app/api/ldap/users/[uid]/route.ts, components/frontend/src/app/api/ldap/groups/route.ts): New Next.js API routes that forward GET requests to backend LDAP endpoints, preserving headers and query string, with error handling and JSON passthrough.
  • Frontend API & Queries (components/frontend/src/services/api/ldap.ts, components/frontend/src/services/api/index.ts, components/frontend/src/services/queries/use-ldap.ts, components/frontend/src/services/queries/index.ts): Adds typed ldapApi (searchUsers, searchGroups, getUser), exported types, a React Query key factory, and hooks (user/group search and single-user) with a min-2-char enable condition and 5-minute caching.
  • Frontend Autocomplete UI (components/frontend/src/components/workspace-sections/_components/ldap-autocomplete.tsx, components/frontend/src/components/workspace-sections/sharing-section.tsx, components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx): New LDAPAutocomplete component with debounced search, keyboard/mouse navigation, and ARIA attributes; integrates into the sharing UI behind the ldap.autocomplete.enabled feature flag and adds a test mock for the flag.
  • Kubernetes Manifests & Flags (components/manifests/base/backend-deployment.yaml, components/manifests/base/flags.json, components/manifests/overlays/*/ldap-config.yaml, components/manifests/overlays/*/kustomization.yaml): Adds LDAP env vars to the backend deployment (LDAP_URL, LDAP_BASE_DN, LDAP_GROUP_BASE_DN, LDAP_SKIP_TLS_VERIFY), new ldap-config ConfigMaps across overlays, and a new feature flag ldap.autocomplete.enabled.

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant UI as LDAPAutocomplete
    participant ReactQuery as React Query
    participant Frontend as Frontend Route
    participant Backend as Backend Handler
    participant Client as LDAP Client
    participant LDAP as LDAP Server

    User->>UI: Type query
    UI->>UI: Debounce 300ms
    UI->>ReactQuery: request search(query)
    ReactQuery->>ReactQuery: check cache
    alt cache hit
        ReactQuery-->>UI: return cached results
    else
        ReactQuery->>Frontend: GET /api/ldap/...?q=query
        Frontend->>Backend: forward request + headers
        Backend->>Backend: auth & validate query
        Backend->>Client: SearchUsers/SearchGroups(query)
        Client->>Client: check local cache
        alt cache hit
            Client-->>Backend: return cached entries
        else
            Client->>LDAP: LDAP search with filter
            LDAP-->>Client: entries
            Client->>Client: parse & cache results
            Client-->>Backend: return entries
        end
        Backend-->>Frontend: 200 + JSON
        Frontend-->>ReactQuery: 200 + JSON
        ReactQuery-->>UI: results
    end
    UI->>User: render suggestions

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning):
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 45.71%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2):
  • Title check: The title accurately describes the main change: adding LDAP user and group autocomplete for workspace sharing. It is specific, concise, and directly reflects the primary objective of the changeset.
  • Description check: The description is directly related to the changeset, providing clear implementation details about LDAP integration, caching strategy, frontend UI behavior, and graceful degradation. It accurately summarizes the changes across both backend and frontend components.

ktdreyer previously approved these changes Mar 10, 2026
@github-actions

Testing

  • Frontend unit tests: cd components/frontend && npx vitest run --coverage (466 tests, ~74% coverage). See components/frontend/README.md.
  • E2E tests: cd e2e && npx cypress run --browser chrome (58 tests, mock SDK). See e2e/README.md.
  • Runner tests: cd components/runners/ambient-runner && python -m pytest tests/
  • Backend tests: cd components/backend && make test. See components/backend/TEST_GUIDE.md.

More Info

See BOOKMARKS.md for architecture decisions, development context, code patterns, and component-specific guides.

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 9

♻️ Duplicate comments (1)
components/manifests/overlays/production/ldap-config.yaml (1)

8-10: ⚠️ Potential issue | 🟠 Major

Do not ship the production overlay with LDAPS verification disabled.

Line 10 sets LDAP_SKIP_TLS_VERIFY: "true", which makes the backend trust spoofed LDAPS endpoints. Please wire in the corporate CA/trusted bundle and keep verification enabled before rolling this out in production.

Minimal manifest change once the CA bundle is wired in
-  LDAP_SKIP_TLS_VERIFY: "true"
+  LDAP_SKIP_TLS_VERIFY: "false"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/manifests/overlays/production/ldap-config.yaml` around lines 8 -
10, Replace the insecure LDAP_SKIP_TLS_VERIFY=true with TLS verification enabled
and wire in the corporate CA bundle: add an env var like LDAP_CA_BUNDLE (or
LDAP_TRUSTED_CA) and populate it from the corporate CA secret/configMap, ensure
the deployment mounts the CA bundle into the pod and the LDAP client (code that
reads LDAP_BASE_DN/LDAP_URL) is configured to use LDAP_CA_BUNDLE for TLS
verification, and set LDAP_SKIP_TLS_VERIFY to "false" (or remove it) so LDAPS
verification remains enabled before shipping production.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@components/backend/go.mod`:
- Line 39: Run the verification commands shown to check whether any Go source in
the backend imports github.com/go-ldap/ldap/v3; if it does, remove the //
indirect marker and promote github.com/go-ldap/ldap/v3 to a direct require in
go.mod by adding a require entry for that module (same version currently listed)
and then run go mod tidy to update the module graph; if no direct imports exist,
leave the dependency as indirect. Ensure you reference the module name
"github.com/go-ldap/ldap/v3" and update the go.mod require block accordingly,
then run Go tooling (go mod tidy) to validate and commit the change.

In `@components/backend/handlers/ldap.go`:
- Around line 37-44: Change all LDAP error responses to use HTTP 503 Service
Unavailable for consistency with external dependency failures: update the
handlers that call LDAPClient.SearchUsers and LDAPClient.SearchGroups to return
c.JSON(http.StatusServiceUnavailable, ...) on err (instead of 500), and modify
the GetLDAPUser handler to return http.StatusServiceUnavailable when LDAP lookup
fails; ensure the error log messages remain unchanged and only the HTTP status
code is adjusted.
- Around line 18-24: The error returned from GetK8sClientsForRequest is being
discarded in SearchLDAPUsers (and likewise in SearchLDAPGroups and GetLDAPUser);
modify each function to capture the returned error and log it before returning
the 401 response — e.g., call your existing logger (or c.Error()/log.Printf)
with a clear message like "GetK8sClientsForRequest failed" plus the error value
when reqK8s is nil; ensure you do this in the SearchLDAPUsers, SearchLDAPGroups,
and GetLDAPUser functions where GetK8sClientsForRequest is called.

In `@components/backend/ldap/client_test.go`:
- Around line 233-257: The TestCacheTTL test uses a very small 10ms margin
(client.cacheTTL = 50ms, Sleep 60ms) which can flake under CI; either increase
the margin in TestCacheTTL (e.g., set cacheTTL to 200ms and Sleep to 300ms) or
refactor Client to accept a clock/time provider (inject a timeNow/Clock used by
cacheGet/cacheSet expiration logic) so tests can use a fake clock—update the
Client struct (and its cacheGet/cacheSet expiration checks) to read from the
injected clock and adjust TestCacheTTL to use the fake clock or a larger sleep
margin to eliminate flakiness.

In `@components/backend/ldap/client.go`:
- Around line 96-120: The cache currently only evicts expired entries on access
(cacheGet) so c.cache (sync.Map) can grow unbounded; add a background janitor
goroutine that periodically ranges over c.cache, type-asserts values to
cacheEntry, and deletes entries whose expiresAt is before time.Now(), using
c.cache.Delete, and start this janitor when the Client is created (e.g. in the
constructor or an init method) and stop it when the Client is closed; reference
cacheGet, cacheSet, cacheEntry, c.cache and c.cacheTTL when implementing the
interval and eviction logic.
- Around line 318-326: sanitizeQuery currently truncates by bytes using
q[:maxQueryLength], which can split multi-byte UTF-8 characters; change
truncation to be rune-aware by converting q to a rune slice and trimming like:
runes := []rune(q); if len(runes) > maxQueryLength { q =
string(runes[:maxQueryLength]) }; keep the existing TrimSpace and Replace steps
and reference the sanitizeQuery function and maxQueryLength constant when making
this change.

In `@components/backend/main.go`:
- Around line 141-144: The code currently falls back to using LDAP_BASE_DN for
LDAP_GROUP_BASE_DN which causes group searches to run under the users OU; change
the logic around handlers.LDAPClient creation so that LDAP_GROUP_BASE_DN is not
defaulted to LDAP_BASE_DN — read LDAP_GROUP_BASE_DN via os.Getenv and if empty
pass an empty string (or a sentinel) to ldap.NewClient and ensure
group-autocomplete is disabled when LDAP_GROUP_BASE_DN is empty instead of
searching under LDAP_BASE_DN; update ldap.NewClient (and any code using its
groupBaseDN parameter) or the caller to treat an empty groupBaseDN as “groups
disabled” rather than falling back to getEnvOrDefault("LDAP_BASE_DN", ...) or
reusing ldapBaseDN.

In `@components/frontend/src/app/api/ldap/users/route.ts`:
- Around line 9-12: The fetch call that proxies to
`${BACKEND_URL}/ldap/users${queryString}` can hang indefinitely; wrap it with an
AbortController and a short timeout (e.g., 2–5s) so the request is aborted when
the timer elapses, pass controller.signal to fetch (alongside existing headers
and method), and clear the timeout on success; apply the same pattern to the
sibling LDAP proxy routes that use BACKEND_URL, queryString, headers, and the
route handler(s) to ensure graceful degradation.

In `@components/manifests/overlays/kind/ldap-config.yaml`:
- Around line 8-10: The manifest currently disables server TLS verification via
LDAP_SKIP_TLS_VERIFY: "true"; change LDAP_SKIP_TLS_VERIFY to "false" in the YAML
and instead provision the LDAP CA into the runtime trust store for the container
image (for example by adding the CA to the image or mounting a ConfigMap/Secret
into the container's system trust directory and running the appropriate
trust-update step) so that the LDAPS endpoint at LDAP_URL and base LDAP_BASE_DN
are validated; ensure any container startup hooks or Dockerfile layers invoke
the platform's CA import command (e.g., update-ca-trust/update-ca-certificates)
so verification succeeds without skipping TLS.

---

Duplicate comments:
In `@components/manifests/overlays/production/ldap-config.yaml`:
- Around line 8-10: Replace the insecure LDAP_SKIP_TLS_VERIFY=true with TLS
verification enabled and wire in the corporate CA bundle: add an env var like
LDAP_CA_BUNDLE (or LDAP_TRUSTED_CA) and populate it from the corporate CA
secret/configMap, ensure the deployment mounts the CA bundle into the pod and
the LDAP client (code that reads LDAP_BASE_DN/LDAP_URL) is configured to use
LDAP_CA_BUNDLE for TLS verification, and set LDAP_SKIP_TLS_VERIFY to "false" (or
remove it) so LDAPS verification remains enabled before shipping production.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: bdfceffd-15bd-4f4d-b7a3-09443dc23b7b

📥 Commits

Reviewing files that changed from the base of the PR and between 07d50e5 and ebc132a.

⛔ Files ignored due to path filters (1)
  • components/backend/go.sum is excluded by !**/*.sum
📒 Files selected for processing (24)
  • Makefile
  • components/backend/go.mod
  • components/backend/handlers/ldap.go
  • components/backend/ldap/client.go
  • components/backend/ldap/client_test.go
  • components/backend/main.go
  • components/backend/routes.go
  • components/frontend/src/app/api/ldap/groups/route.ts
  • components/frontend/src/app/api/ldap/users/[uid]/route.ts
  • components/frontend/src/app/api/ldap/users/route.ts
  • components/frontend/src/components/workspace-sections/_components/ldap-autocomplete.tsx
  • components/frontend/src/components/workspace-sections/sharing-section.tsx
  • components/frontend/src/services/api/index.ts
  • components/frontend/src/services/api/ldap.ts
  • components/frontend/src/services/queries/index.ts
  • components/frontend/src/services/queries/use-ldap.ts
  • components/manifests/base/backend-deployment.yaml
  • components/manifests/base/flags.json
  • components/manifests/overlays/kind/kustomization.yaml
  • components/manifests/overlays/kind/ldap-config.yaml
  • components/manifests/overlays/local-dev/kustomization.yaml
  • components/manifests/overlays/local-dev/ldap-config.yaml
  • components/manifests/overlays/production/kustomization.yaml
  • components/manifests/overlays/production/ldap-config.yaml

	github.com/gabriel-vasile/mimetype v1.4.9 // indirect
	github.com/gin-contrib/sse v1.1.0 // indirect
	github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 // indirect
	github.com/go-ldap/ldap/v3 v3.4.12 // indirect

🧹 Nitpick | 🔵 Trivial

Promote github.com/go-ldap/ldap/v3 to a direct requirement if the new LDAP client imports it.

If any backend file imports this module directly, keeping Line 39 as // indirect just creates churn the next time go mod tidy runs. If it is only transitive, ignore this.

Verify whether the dependency is directly imported
#!/bin/bash
set -euo pipefail

rg -n --type=go 'github\.com/go-ldap/ldap/v3' components/backend
rg -n 'github\.com/go-ldap/ldap/v3' components/backend/go.mod
Minimal fix if the import is direct
-	github.com/go-ldap/ldap/v3 v3.4.12 // indirect
+	github.com/go-ldap/ldap/v3 v3.4.12
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/backend/go.mod` at line 39, Run the verification commands shown to
check whether any Go source in the backend imports github.com/go-ldap/ldap/v3;
if it does, remove the // indirect marker and promote github.com/go-ldap/ldap/v3
to a direct require in go.mod by adding a require entry for that module (same
version currently listed) and then run go mod tidy to update the module graph;
if no direct imports exist, leave the dependency as indirect. Ensure you
reference the module name "github.com/go-ldap/ldap/v3" and update the go.mod
require block accordingly, then run Go tooling (go mod tidy) to validate and
commit the change.

Comment on lines +18 to +24
func SearchLDAPUsers(c *gin.Context) {
	reqK8s, _ := GetK8sClientsForRequest(c)
	if reqK8s == nil {
		c.JSON(http.StatusUnauthorized, gin.H{"error": "Invalid or missing token"})
		c.Abort()
		return
	}

🧹 Nitpick | 🔵 Trivial

Consider logging the discarded error from GetK8sClientsForRequest.

The error returned by GetK8sClientsForRequest is silently discarded. While returning 401 is correct, logging the error would aid debugging authentication issues.

♻️ Proposed fix to log auth errors
 func SearchLDAPUsers(c *gin.Context) {
-	reqK8s, _ := GetK8sClientsForRequest(c)
+	reqK8s, err := GetK8sClientsForRequest(c)
 	if reqK8s == nil {
+		if err != nil {
+			log.Printf("LDAP auth error: %v", err)
+		}
 		c.JSON(http.StatusUnauthorized, gin.H{"error": "Invalid or missing token"})
 		c.Abort()
 		return
 	}

Apply the same pattern to SearchLDAPGroups and GetLDAPUser.

Also applies to: 48-54, 78-84

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/backend/handlers/ldap.go` around lines 18 - 24, The error returned
from GetK8sClientsForRequest is being discarded in SearchLDAPUsers (and likewise
in SearchLDAPGroups and GetLDAPUser); modify each function to capture the
returned error and log it before returning the 401 response — e.g., call your
existing logger (or c.Error()/log.Printf) with a clear message like
"GetK8sClientsForRequest failed" plus the error value when reqK8s is nil; ensure
you do this in the SearchLDAPUsers, SearchLDAPGroups, and GetLDAPUser functions
where GetK8sClientsForRequest is called.

Comment on lines +37 to +44
	users, err := LDAPClient.SearchUsers(query)
	if err != nil {
		log.Printf("LDAP user search error for query %q: %v", query, err)
		c.JSON(http.StatusServiceUnavailable, gin.H{"error": "LDAP search unavailable"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"users": users})

🧹 Nitpick | 🔵 Trivial

Inconsistent HTTP status codes for LDAP errors.

SearchUsers/SearchGroups return 503 Service Unavailable on LDAP errors, while GetLDAPUser returns 500 Internal Server Error. Consider aligning these for consistency—503 is more appropriate since LDAP is an external dependency.

♻️ Proposed fix for consistency
 	user, err := LDAPClient.GetUser(uid)
 	if err != nil {
 		log.Printf("LDAP user get error for uid %q: %v", uid, err)
-		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to look up user"})
+		c.JSON(http.StatusServiceUnavailable, gin.H{"error": "LDAP lookup unavailable"})
 		return
 	}

Also applies to: 67-74, 97-102

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/backend/handlers/ldap.go` around lines 37 - 44, Change all LDAP
error responses to use HTTP 503 Service Unavailable for consistency with
external dependency failures: update the handlers that call
LDAPClient.SearchUsers and LDAPClient.SearchGroups to return
c.JSON(http.StatusServiceUnavailable, ...) on err (instead of 500), and modify
the GetLDAPUser handler to return http.StatusServiceUnavailable when LDAP lookup
fails; ensure the error log messages remain unchanged and only the HTTP status
code is adjusted.

Comment on lines +233 to +257
func TestCacheTTL(t *testing.T) {
	client := &Client{
		cacheTTL: 50 * time.Millisecond,
	}

	client.cacheSet("test-key", "test-value")

	// Should be in cache
	val, ok := client.cacheGet("test-key")
	if !ok {
		t.Fatal("expected cache hit")
	}
	if val.(string) != "test-value" {
		t.Errorf("expected 'test-value', got %v", val)
	}

	// Wait for expiry
	time.Sleep(60 * time.Millisecond)

	// Should be expired
	_, ok = client.cacheGet("test-key")
	if ok {
		t.Fatal("expected cache miss after TTL")
	}
}

🧹 Nitpick | 🔵 Trivial

Cache TTL test may be flaky under CI load.

The 10ms margin between TTL (50ms) and sleep (60ms) could cause intermittent failures on slow/loaded CI runners. Consider using a time abstraction or increasing the margin.

♻️ Potential improvement for test stability
 func TestCacheTTL(t *testing.T) {
 	client := &Client{
-		cacheTTL: 50 * time.Millisecond,
+		cacheTTL: 20 * time.Millisecond,
 	}
 
 	client.cacheSet("test-key", "test-value")
 
 	// Should be in cache
 	val, ok := client.cacheGet("test-key")
 	if !ok {
 		t.Fatal("expected cache hit")
 	}
 	if val.(string) != "test-value" {
 		t.Errorf("expected 'test-value', got %v", val)
 	}
 
 	// Wait for expiry
-	time.Sleep(60 * time.Millisecond)
+	time.Sleep(50 * time.Millisecond)
 
 	// Should be expired
 	_, ok = client.cacheGet("test-key")
 	if ok {
 		t.Fatal("expected cache miss after TTL")
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/backend/ldap/client_test.go` around lines 233 - 257, The
TestCacheTTL test uses a very small 10ms margin (client.cacheTTL = 50ms, Sleep
60ms) which can flake under CI; either increase the margin in TestCacheTTL
(e.g., set cacheTTL to 200ms and Sleep to 300ms) or refactor Client to accept a
clock/time provider (inject a timeNow/Clock used by cacheGet/cacheSet expiration
logic) so tests can use a fake clock—update the Client struct (and its
cacheGet/cacheSet expiration checks) to read from the injected clock and adjust
TestCacheTTL to use the fake clock or a larger sleep margin to eliminate
flakiness.

Comment on lines +96 to +120
// cacheGet returns a cached value if it exists and hasn't expired.
func (c *Client) cacheGet(key string) (any, bool) {
	val, ok := c.cache.Load(key)
	if !ok {
		return nil, false
	}
	entry, ok := val.(cacheEntry)
	if !ok {
		c.cache.Delete(key)
		return nil, false
	}
	if time.Now().After(entry.expiresAt) {
		c.cache.Delete(key)
		return nil, false
	}
	return entry.value, true
}

// cacheSet stores a value in the cache with TTL.
func (c *Client) cacheSet(key string, value any) {
	c.cache.Store(key, cacheEntry{
		value:     value,
		expiresAt: time.Now().Add(c.cacheTTL),
	})
}

⚠️ Potential issue | 🟡 Minor

Cache entries are never proactively cleaned; memory may grow unbounded.

Expired entries are only removed on access (cacheGet). If unique queries accumulate without being re-accessed, the sync.Map grows indefinitely. Consider adding periodic cleanup or using a bounded LRU cache.

🛠️ Suggested approach

Options to address this:

  1. Add a background goroutine that periodically scans and evicts expired entries
  2. Use a bounded cache library like hashicorp/golang-lru or patrickmn/go-cache
  3. Accept the risk if query cardinality is expected to be low

Given the 5-minute TTL and typical usage patterns, this may be acceptable for now, but worth monitoring memory usage in production.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/backend/ldap/client.go` around lines 96 - 120, The cache currently
only evicts expired entries on access (cacheGet) so c.cache (sync.Map) can grow
unbounded; add a background janitor goroutine that periodically ranges over
c.cache, type-asserts values to cacheEntry, and deletes entries whose expiresAt
is before time.Now(), using c.cache.Delete, and start this janitor when the
Client is created (e.g. in the constructor or an init method) and stop it when
the Client is closed; reference cacheGet, cacheSet, cacheEntry, c.cache and
c.cacheTTL when implementing the interval and eviction logic.

Comment on lines +318 to +326
// sanitizeQuery cleans and truncates a search query.
func sanitizeQuery(q string) string {
	q = strings.TrimSpace(q)
	q = strings.NewReplacer("\n", "", "\r", "").Replace(q)
	if len(q) > maxQueryLength {
		q = q[:maxQueryLength]
	}
	return q
}

⚠️ Potential issue | 🟡 Minor

UTF-8 truncation may split multi-byte characters.

q[:maxQueryLength] slices at byte position 50, which can split a multi-byte UTF-8 character (e.g., emoji or non-ASCII names), producing invalid UTF-8 and potentially unexpected LDAP filter behavior.

🔧 Proposed fix using rune-aware truncation
 func sanitizeQuery(q string) string {
 	q = strings.TrimSpace(q)
 	q = strings.NewReplacer("\n", "", "\r", "").Replace(q)
-	if len(q) > maxQueryLength {
-		q = q[:maxQueryLength]
+	if len([]rune(q)) > maxQueryLength {
+		q = string([]rune(q)[:maxQueryLength])
 	}
 	return q
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/backend/ldap/client.go` around lines 318 - 326, sanitizeQuery
currently truncates by bytes using q[:maxQueryLength], which can split
multi-byte UTF-8 characters; change truncation to be rune-aware by converting q
to a rune slice and trimming like: runes := []rune(q); if len(runes) >
maxQueryLength { q = string(runes[:maxQueryLength]) }; keep the existing
TrimSpace and Replace steps and reference the sanitizeQuery function and
maxQueryLength constant when making this change.

Comment on lines +141 to +144
	ldapBaseDN := getEnvOrDefault("LDAP_BASE_DN", "ou=users,dc=redhat,dc=com")
	ldapGroupBaseDN := os.Getenv("LDAP_GROUP_BASE_DN") // optional, derived from LDAP_BASE_DN if empty
	skipTLS := os.Getenv("LDAP_SKIP_TLS_VERIFY") == "true"
	handlers.LDAPClient = ldap.NewClient(ldapURL, ldapBaseDN, ldapGroupBaseDN, skipTLS)

⚠️ Potential issue | 🟠 Major

Don't reuse the users OU for group search.

When LDAP_GROUP_BASE_DN is unset, the client falls back to LDAP_BASE_DN. With the current overlays setting LDAP_BASE_DN to ou=users,..., /ldap/groups will search inside the users OU and typically return nothing. Require an explicit group base DN, or disable group autocomplete when it isn't configured.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/backend/main.go` around lines 141 - 144, The code currently falls
back to using LDAP_BASE_DN for LDAP_GROUP_BASE_DN which causes group searches to
run under the users OU; change the logic around handlers.LDAPClient creation so
that LDAP_GROUP_BASE_DN is not defaulted to LDAP_BASE_DN — read
LDAP_GROUP_BASE_DN via os.Getenv and if empty pass an empty string (or a
sentinel) to ldap.NewClient and ensure group-autocomplete is disabled when
LDAP_GROUP_BASE_DN is empty instead of searching under LDAP_BASE_DN; update
ldap.NewClient (and any code using its groupBaseDN parameter) or the caller to
treat an empty groupBaseDN as “groups disabled” rather than falling back to
getEnvOrDefault("LDAP_BASE_DN", ...) or reusing ldapBaseDN.

Comment on lines +9 to +12
    const response = await fetch(`${BACKEND_URL}/ldap/users${queryString}`, {
      method: 'GET',
      headers,
    });

⚠️ Potential issue | 🟠 Major

Bound the backend proxy call.

This route currently waits indefinitely for the backend/LDAP side. If that dependency hangs instead of erroring, the Next.js request thread hangs too and the graceful-degradation path never triggers. Add a short timeout/abort here, and mirror the same fix in the sibling LDAP proxy routes.

⏱️ Example fix
-    const response = await fetch(`${BACKEND_URL}/ldap/users${queryString}`, {
-      method: 'GET',
-      headers,
-    });
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 5000);
+    let response: Response;
+    try {
+      response = await fetch(`${BACKEND_URL}/ldap/users${queryString}`, {
+        method: 'GET',
+        headers,
+        signal: controller.signal,
+      });
+    } finally {
+      clearTimeout(timeoutId);
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/frontend/src/app/api/ldap/users/route.ts` around lines 9 - 12, The
fetch call that proxies to `${BACKEND_URL}/ldap/users${queryString}` can hang
indefinitely; wrap it with an AbortController and a short timeout (e.g., 2–5s)
so the request is aborted when the timer elapses, pass controller.signal to
fetch (alongside existing headers and method), and clear the timeout on success;
apply the same pattern to the sibling LDAP proxy routes that use BACKEND_URL,
queryString, headers, and the route handler(s) to ensure graceful degradation.

Comment on lines +8 to +10
LDAP_URL: "ldaps://ldap.corp.redhat.com"
LDAP_BASE_DN: "ou=users,dc=redhat,dc=com"
LDAP_SKIP_TLS_VERIFY: "true"

⚠️ Potential issue | 🟠 Major

Re-enable LDAPS certificate verification.

LDAP_SKIP_TLS_VERIFY: "true" disables server authentication for a real shared LDAP endpoint, so a network intermediary can impersonate the directory and feed forged autocomplete results into the sharing flow. If the cluster does not yet trust the issuing CA, ship that CA into the container trust store instead of turning verification off.

🧰 Tools
🪛 Checkov (3.2.334)

[low] 1-10: The default namespace should not be used

(CKV_K8S_21)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/manifests/overlays/kind/ldap-config.yaml` around lines 8 - 10, The
manifest currently disables server TLS verification via LDAP_SKIP_TLS_VERIFY:
"true"; change LDAP_SKIP_TLS_VERIFY to "false" in the YAML and instead provision
the LDAP CA into the runtime trust store for the container image (for example by
adding the CA to the image or mounting a ConfigMap/Secret into the container's
system trust directory and running the appropriate trust-update step) so that
the LDAPS endpoint at LDAP_URL and base LDAP_BASE_DN are validated; ensure any
container startup hooks or Dockerfile layers invoke the platform's CA import
command (e.g., update-ca-trust/update-ca-certificates) so verification succeeds
without skipping TLS.

The feature flag hook added in 513b5ae requires a QueryClientProvider,
which the test harness doesn't supply. Mock it to unblock CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Martin Prpič <mprpic@redhat.com>
@github-actions
Contributor

Review posted - see full analysis below in a follow-up comment.

@github-actions
Contributor

CRITICAL: TLS verification disabled in production LDAP config (components/manifests/overlays/production/ldap-config.yaml:9 sets LDAP_SKIP_TLS_VERIFY to true). This exposes the ldaps:// connection to MITM attacks. Fix: set to false or remove the key in production overlay. The dev overlays can keep it. Standard: security-standards.md.

@github-actions
Contributor

MAJOR: Custom autocomplete dropdown instead of Shadcn Combobox pattern. File: ldap-autocomplete.tsx. The dropdown uses raw ul/li elements with hand-rolled focus, keyboard navigation, and click-outside logic. Shadcn ships Command + Popover Combobox for exactly this use case. Standard violated: frontend-development.md Shadcn UI Components Only rule (FORBIDDEN: Creating custom UI components from scratch). Fix: replace with Popover, PopoverContent, PopoverTrigger, Command, CommandInput, CommandList, CommandItem, CommandEmpty. Bonus: the component is also 252 lines (limit is 200); switching to Shadcn Combobox will bring it under the limit. Standard: frontend-development.md.

@github-actions
Contributor

MINOR issues (4):

  1. Inconsistent HTTP status codes: SearchLDAPUsers/Groups return 503 but GetLDAPUser returns 500 for the same failure class — standardise on 503 (handlers/ldap.go:104 vs :164).
  2. Hardcoded org domain fallback in NewClient: the fallback ou=managedGroups,dc=redhat,dc=com bakes a Red Hat-specific domain into a generic library — log a warning instead (ldap/client.go:249).
  3. useLDAPUser hook and ldapApi.getUser are exported but not used anywhere in this PR — remove or document intent.
  4. The autocomplete component has no error state: the useEffect checks isLoading/hasResults but not error, so LDAP query failures silently close the dropdown — render a brief message when error is set.

@github-actions
Contributor

POSITIVE HIGHLIGHTS:

  1. Excellent LDAP injection prevention — goldap.EscapeFilter() used in all filter builders, sanitizeQuery() strips newlines, %q format verb throughout error logs.
  2. Auth pattern correct throughout — every handler uses GetK8sClientsForRequest(c) with 401 on nil, per k8s-client-usage.md.
  3. Graceful degradation — LDAP optional (LDAP_URL env gated), Unleash-flag gated in frontend, sharing form falls back to plain Input.
  4. Cache TTL consistent end-to-end — backend 5 min, frontend staleTime 5 min.
  5. Solid test coverage — filters, TTL expiry, sanitizeQuery, ParseGitHubUsername, short-query cases all tested.
  6. LDAPResultSizeLimitExceeded treated as non-fatal partial result — correct for autocomplete.
  7. Makefile Unleash password fix replaces hardcoded default with kubectl secret lookup.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx`:
- Around line 29-31: The test currently forces useWorkspaceFlag() to always
return disabled which prevents rendering the LDAP-autocomplete branch in
SharingSection; change the mock so tests can control the flag per-case by
exporting/using a mutable mock function (e.g., vi.fn()) for useWorkspaceFlag and
set its return value inside individual tests or beforeEach/within specific test
cases; locate the vi.mock for use-feature-flags-admin and replace the fixed
return with a configurable mock (referencing useWorkspaceFlag and
SharingSection) so tests can call mockReturnValue or mockImplementation to
enable/disable the feature as needed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 2c6fddcf-913d-433a-9c5c-1bf1db00e25f

📥 Commits

Reviewing files that changed from the base of the PR and between ebc132a and caf55f0.

📒 Files selected for processing (1)
  • components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx

Comment on lines +29 to +31
vi.mock('@/services/queries/use-feature-flags-admin', () => ({
  useWorkspaceFlag: vi.fn(() => ({ enabled: false, isLoading: false })),
}));

⚠️ Potential issue | 🟠 Major

Make the feature-flag mock configurable instead of forcing the fallback path.

This locks useWorkspaceFlag() to disabled for the whole suite, so these tests never render the new LDAP autocomplete branch in SharingSection. The PR’s main UI path can regress without this file catching it.

Suggested test setup
 const mockAddMutate = vi.fn();
 const mockRemoveMutate = vi.fn();
 const mockRefetch = vi.fn();
+const mockUseWorkspaceFlag = vi.fn();

 vi.mock('@/services/queries/use-feature-flags-admin', () => ({
-  useWorkspaceFlag: vi.fn(() => ({ enabled: false, isLoading: false })),
+  useWorkspaceFlag: mockUseWorkspaceFlag,
 }));

 describe('SharingSection', () => {
   beforeEach(() => {
     vi.clearAllMocks();
+    mockUseWorkspaceFlag.mockReturnValue({ enabled: false, isLoading: false });
   });
+
+  it('renders the autocomplete flow when the workspace flag is enabled', () => {
+    mockUseWorkspaceFlag.mockReturnValue({ enabled: true, isLoading: false });
+
+    render(<SharingSection projectName="test-project" />);
+    fireEvent.click(screen.getByText('Grant Permission'));
+
+    expect(screen.getByRole('combobox')).toBeDefined();
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx`
around lines 29 - 31, The test currently forces useWorkspaceFlag() to always
return disabled which prevents rendering the LDAP-autocomplete branch in
SharingSection; change the mock so tests can control the flag per-case by
exporting/using a mutable mock function (e.g., vi.fn()) for useWorkspaceFlag and
set its return value inside individual tests or beforeEach/within specific test
cases; locate the vi.mock for use-feature-flags-admin and replace the fixed
return with a configurable mock (referencing useWorkspaceFlag and
SharingSection) so tests can call mockReturnValue or mockImplementation to
enable/disable the feature as needed.

@github-actions
Contributor

Claude Code Review

Summary

This PR adds LDAP-backed autocomplete for user and group selection in the workspace sharing dialog. The implementation is well-scoped: a new Go LDAP package with in-memory caching, three authenticated API endpoints, Next.js proxy routes, and a React Query-powered frontend. Graceful degradation (LDAP optional, feature-flagged) is handled correctly throughout. However, a production manifest with TLS verification disabled and a frontend component that bypasses Shadcn UI conventions need to be addressed.


Issues by Severity

Blocker Issues

None

Critical Issues

1. Production manifest ships with TLS verification disabled

components/manifests/overlays/production/ldap-config.yaml sets LDAP_SKIP_TLS_VERIFY: "true". This disables X.509 certificate verification on the LDAPS connection, making production traffic vulnerable to MITM attacks. The nolint comment in ldap/client.go already calls this out as dev-only:

InsecureSkipVerify: c.skipTLSVerify, //nolint:gosec // controlled by LDAP_SKIP_TLS_VERIFY env var for dev

Setting it "true" in kind and local-dev is fine. In production it must be "false" or the key omitted entirely (Go zero value for bool is false).

Violates: security-standards.md — no credential/transport bypass in production.


2. LDAPAutocomplete builds a custom dropdown instead of Shadcn Command/Combobox

ldap-autocomplete.tsx renders a hand-rolled <ul> dropdown with manual keyboard navigation and highlight tracking. Shadcn ships a Command component (built on cmdk) that provides exactly this pattern. Frontend standards are zero-tolerance here:

FORBIDDEN: Creating custom UI components from scratch for buttons, inputs, dialogs, etc.
REQUIRED: Use @/components/ui/* components

The correct approach is Popover + Command + CommandInput + CommandList + CommandItem, which also provides built-in keyboard navigation, ARIA semantics, and theming. The component is also 252 lines, exceeding the 200-line pre-commit limit.

Violates: frontend-development.md — Shadcn UI components only (zero tolerance).


Major Issues

3. In-memory cache has no active eviction

ldap/client.go — the sync.Map-based cache uses lazy expiry: stale entries are only removed when cacheGet is called for that exact key again. Entries for queries never repeated stay in memory indefinitely. In a long-running backend serving many distinct autocomplete queries, the cache will grow without bound. Recommend a periodic sweeper in NewClient:

go func() {
    ticker := time.NewTicker(c.cacheTTL)
    defer ticker.Stop()
    for range ticker.C {
        c.cache.Range(func(key, val any) bool {
            if e, ok := val.(cacheEntry); ok && time.Now().After(e.expiresAt) {
                c.cache.Delete(key)
            }
            return true
        })
    }
}()

Violates: backend-development.md — performance — unbounded list operations.


Minor Issues

4. useWorkspaceFlag isLoading not handled — brief UI flash

sharing-section.tsx:37 only destructures enabled from useWorkspaceFlag. While the flag is loading, ldapEnabled is false, so the plain <Input> renders first and then switches to <LDAPAutocomplete> once resolved. Destructure and use isLoading to disable the input while loading.

Violates: frontend-development.md — loading states must be handled.

5. LDAP startup log reveals deployment configuration at INFO level

main.go:144 logs the full LDAP URL, base DNs, and TLS skip setting at INFO on startup. These are not secrets, but verbose operational detail. A simple log.Println("LDAP client initialized") is preferable.


Positive Highlights

  • Excellent input sanitization: sanitizeQuery strips newlines (preventing log injection) and truncates to maxQueryLength. goldap.EscapeFilter() is used correctly throughout all filter builders — no LDAP injection risk.
  • Graceful degradation is well-implemented: LDAPClient == nil returns empty arrays (not errors) for search, and the feature flag gates the UI correctly. No panics if LDAP is unreachable.
  • User token auth correctly enforced on all three LDAP endpoints: GetK8sClientsForRequest is called in every handler with a 401 return on failure — no service account fallback.
  • React Query hooks follow project patterns exactly: ldapKeys factory, enabled guard on minimum query length, staleTime matching the backend 5-minute cache TTL, clean api/ vs queries/ separation.
  • Strong test coverage for filter logic: client_test.go covers filter building, GitHub username parsing, cache TTL behaviour, and sanitization edge cases.
  • Kubernetes configuration is clean: all LDAP env vars use optional: true, so existing deployments without the ConfigMap are unaffected.

Recommendations

  1. [Critical] Set LDAP_SKIP_TLS_VERIFY: "false" or remove the key from production/ldap-config.yaml.
  2. [Critical] Replace the hand-rolled dropdown in ldap-autocomplete.tsx with Shadcn Popover + Command + CommandList + CommandItem.
  3. [Major] Add a periodic cache sweeper in ldap/client.go NewClient to evict expired entries.
  4. [Minor] Destructure and use isLoading from useWorkspaceFlag in sharing-section.tsx.
  5. [Minor] Reduce the LDAP startup log verbosity in main.go.

Review generated by Claude Code using repository standards from .claude/context/ and .claude/patterns/.

@ambient-code

ambient-code bot commented Mar 11, 2026

Review Queue — Blockers Found

| Check | Status | Detail |
| --- | --- | --- |
| CI | pass | |
| Merge conflicts | pass | |
| Review comments | FAIL | 16 inline threads on components/frontend/src/components/workspace-sections/__tests__/sharing-section.test.tsx, components/backend/ldap/client_test.go |
| Jira hygiene | pass | |
| Fork PR | pass | --- |
| Staleness | pass | |

This comment is auto-generated by the Review Queue workflow and will be updated when the PR changes.

@ambient-code ambient-code bot added this to the Merge Queue milestone Mar 11, 2026